

Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond

Neural Information Processing Systems

Sampling with Markov chain Monte Carlo methods typically amounts to discretizing some continuous-time dynamics with numerical integration. In this paper, we establish the convergence rate of sampling algorithms obtained by discretizing smooth It\^o diffusions exhibiting fast $2$-Wasserstein contraction, based on local deviation properties of the integration scheme. In particular, we study a sampling algorithm constructed by discretizing the overdamped Langevin diffusion with the method of stochastic Runge-Kutta. For strongly convex potentials that are smooth up to a certain order, its iterates converge to the target distribution in $2$-Wasserstein distance in $\tilde{\mathcal{O}}(d\epsilon^{-2/3})$ iterations. This improves upon the best-known rate for strongly log-concave sampling based on the overdamped Langevin equation using only the gradient oracle without adjustment. Additionally, we extend our analysis of stochastic Runge-Kutta methods to uniformly dissipative diffusions with possibly non-convex potentials and show that they achieve better rates than the Euler-Maruyama scheme in terms of the dependence on the tolerance $\epsilon$. Numerical studies show that these algorithms lead to better stability and lower asymptotic errors.
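As a rough illustration of the two discretizations discussed in the abstract, here is a minimal Python sketch comparing an Euler-Maruyama (unadjusted Langevin) step with a two-stage, Heun-type stochastic Runge-Kutta step for the overdamped Langevin diffusion. The quadratic potential, step size, and iteration count are illustrative assumptions, and the paper's actual order-1.5 scheme for additive-noise SDEs uses additional Gaussian variables and different stage coefficients; this is only a sketch of the general idea.

import numpy as np

def grad_potential(x):
    # Illustrative strongly convex potential f(x) = 0.5 * ||x||^2, so the target pi \propto exp(-f) is a standard Gaussian.
    return x

def euler_maruyama_step(x, h, rng):
    # One Euler-Maruyama step for dX_t = -grad f(X_t) dt + sqrt(2) dB_t.
    return x - h * grad_potential(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)

def two_stage_srk_step(x, h, rng):
    # One Heun-type stochastic Runge-Kutta step: average the drift at the current
    # point and at an Euler predictor, reusing the same Brownian increment.
    noise = np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    x_pred = x - h * grad_potential(x) + noise
    return x - 0.5 * h * (grad_potential(x) + grad_potential(x_pred)) + noise

def sample(step, dim=10, h=0.05, n_iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for _ in range(n_iters):
        x = step(x, h, rng)
    return x

print(np.linalg.norm(sample(euler_maruyama_step)))
print(np.linalg.norm(sample(two_stage_srk_step)))

Each two-stage step in this sketch costs two gradient evaluations; the rates quoted in the abstract count iterations of the discretized chain.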

  diffusion, name change, runge-kutta accelerate langevin monte carlo

Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond

Neural Information Processing Systems

Sampling with Markov chain Monte Carlo methods typically amounts to discretizing some continuous-time dynamics with numerical integration. In this paper, we establish the convergence rate of sampling algorithms obtained by discretizing smooth It\^o diffusions exhibiting fast $2$-Wasserstein contraction, based on local deviation properties of the integration scheme. In particular, we study a sampling algorithm constructed by discretizing the overdamped Langevin diffusion with the method of stochastic Runge-Kutta. For strongly convex potentials that are smooth up to a certain order, its iterates converge to the target distribution in $2$-Wasserstein distance in $\tilde{\mathcal{O}}(d\epsilon^{-2/3})$ iterations. This improves upon the best-known rate for strongly log-concave sampling based on the overdamped Langevin equation using only the gradient oracle without adjustment.

  algorithm, diffusion, runge-kutta accelerate langevin monte carlo

Reviews: Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond

Neural Information Processing Systems

After Rebuttal: Thank you for the responses. I believe that the paper will be even stronger with the inclusion of the stochastic gradient variant. This is a very valuable theorem, which will be useful for other theoreticians working in this field. On the other hand, to the best of my knowledge, this is the first paper that uses a stochastic Runge-Kutta integrator for sampling from strongly log-concave densities with explicit guarantees. The authors further show that their proposed numerical scheme improves upon the existing guarantees when applied to the overdamped Langevin dynamics.



Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond

Li, Xuechen, Wu, Yi, Mackey, Lester, Erdogdu, Murat A.

Neural Information Processing Systems

Sampling with Markov chain Monte Carlo methods typically amounts to discretizing some continuous-time dynamics with numerical integration. In this paper, we establish the convergence rate of sampling algorithms obtained by discretizing smooth It\^o diffusions exhibiting fast $2$-Wasserstein contraction, based on local deviation properties of the integration scheme. In particular, we study a sampling algorithm constructed by discretizing the overdamped Langevin diffusion with the method of stochastic Runge-Kutta. For strongly convex potentials that are smooth up to a certain order, its iterates converge to the target distribution in $2$-Wasserstein distance in $\tilde{\mathcal{O}}(d\epsilon^{-2/3})$ iterations. This improves upon the best-known rate for strongly log-concave sampling based on the overdamped Langevin equation using only the gradient oracle without adjustment.
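For reference, the two objects recurring in the abstracts above can be stated in standard form (generic definitions, not text from the paper). The overdamped Langevin diffusion targeting $\pi(x) \propto e^{-f(x)}$ is

$$\mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t,$$

and the $2$-Wasserstein distance between probability measures $\mu$ and $\nu$ is

$$W_2(\mu,\nu)^2 = \inf_{\gamma \in \Gamma(\mu,\nu)} \int \|x-y\|^2\,\mathrm{d}\gamma(x,y),$$

where $\Gamma(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$. The diffusion has $\pi$ as its stationary distribution, and the guarantees above bound $W_2$ between the law of the discretized iterates and $\pi$.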